13 research outputs found

    Predicting responders to prone positioning in mechanically ventilated patients with COVID-19 using machine learning

    Background: For mechanically ventilated critically ill COVID-19 patients, prone positioning has quickly become an important treatment strategy; however, it is labor intensive and comes with potential adverse effects. Identifying which critically ill intubated COVID-19 patients will benefit may therefore help allocate labor resources. Methods: From the multi-center Dutch Data Warehouse of COVID-19 ICU patients from 25 hospitals, we selected all 3619 episodes of prone positioning in 1142 invasively mechanically ventilated patients. We excluded episodes longer than 24 h. Berlin ARDS criteria were not formally documented. We used the supervised machine learning algorithms Logistic Regression, Random Forest, Naive Bayes, K-Nearest Neighbors, Support Vector Machine, and Extreme Gradient Boosting on readily available and clinically relevant features to predict the success of prone positioning after 4 h (window of 1 to 7 h) for several possible outcomes. These outcomes were defined as improvements of at least 10% in PaO2/FiO2 ratio, ventilatory ratio, respiratory system compliance, or mechanical power. Separate models were created for each of these outcomes. Re-supination within 4 h after pronation was labeled as failure. We also developed models using a 20 mmHg improvement cut-off for PaO2/FiO2 ratio and using a combined outcome parameter. For all models, we evaluated feature importance expressed as contribution to predictive performance based on relative ranking. Results: The median duration of prone episodes was 17 h (IQR 11-20, N = 2632). Despite extensive modeling using a plethora of machine learning techniques and a large number of potentially clinically relevant features, discrimination between responders and non-responders remained poor, with an area under the receiver operating characteristic curve of 0.62 for PaO2/FiO2 ratio using Logistic Regression, Random Forest, and XGBoost. Feature importance was inconsistent between models for different outcomes. Notably, neither being a previous responder to prone positioning nor PEEP levels before prone positioning provided any meaningful contribution to predicting a successful next proning episode. Conclusions: In mechanically ventilated COVID-19 patients, predicting the success of prone positioning from clinically relevant and readily available parameters in electronic health records is currently not feasible. Given the current evidence base, a liberal approach to proning in all patients with severe COVID-19 ARDS is therefore justified, regardless of the results of previous proning episodes. Keywords: Acute respiratory distress syndrome; COVID-19; Mechanical ventilation
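
    A minimal sketch of the prediction setup described above, assuming a scikit-learn workflow: a binary classifier trained on pre-proning features to predict a responder label, evaluated by area under the ROC curve. The data, feature names, and outcome definition below are synthetic stand-ins, not the Dutch Data Warehouse variables.

```python
# Hedged sketch: synthetic data standing in for pre-proning EHR features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical pre-proning features (e.g. PaO2/FiO2, PEEP, compliance, ventilatory ratio)
X = rng.normal(size=(n, 4))
# Hypothetical label: 1 if PaO2/FiO2 improved by >=10% around 4 h after pronation
y = (0.3 * X[:, 0] + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUROC: {auc:.2f}")  # the paper reports ~0.62 on real data, i.e. poor discrimination
```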

    Environment induced emergence of collective behaviour in evolving swarms with limited sensing

    Designing controllers for robot swarms is challenging, because human developers typically have no good understanding of the link between the details of a controller that governs individual robots and the swarm behavior that indirectly results from the interactions between swarm members and the environment. In this paper we investigate whether an evolutionary approach can mitigate this problem. We consider a challenging task in which robots with limited sensing and communication abilities must follow the gradient of an environmental feature, and we use Differential Evolution to evolve a neural network controller for simulated robots. We conduct a systematic study to measure the flexibility and scalability of the method by varying the size of the arena and the number of robots in the swarm. The experiments confirm the feasibility of our approach: the evolved robot controllers induced swarm behavior that solved the task. We found that solutions evolved under the harshest conditions (where the environmental cues were the weakest) were the most flexible, and that there is a sweet spot regarding the swarm size. Furthermore, we observed collective motion of the swarm, showcasing truly emergent behavior that was neither represented in nor selected for during evolution. Comment: Three authors contributed equally to this research.
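
    A minimal sketch of the evolutionary setup, assuming SciPy's differential_evolution: the optimizer searches the weight vector of a small neural network controller, guided by a fitness function. The fitness below is a toy stand-in (rewarding motor outputs aligned with a local gradient); the paper evaluates controllers in a swarm simulation with limited sensing instead.

```python
# Hedged sketch: Differential Evolution over the weights of a tiny controller network.
import numpy as np
from scipy.optimize import differential_evolution

N_IN, N_HID, N_OUT = 3, 4, 2          # sensor inputs, hidden units, motor outputs
N_W = N_IN * N_HID + N_HID * N_OUT    # number of weights to evolve

SENSORS = np.random.default_rng(1).normal(size=(64, N_IN))  # fixed batch of hypothetical sensor readings

def controller(weights, sensors):
    """Tiny feedforward network: sensors -> hidden (tanh) -> motor commands."""
    w1 = weights[: N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = weights[N_IN * N_HID:].reshape(N_HID, N_OUT)
    return np.tanh(np.tanh(sensors @ w1) @ w2)

def fitness(weights):
    """Toy objective: reward motor outputs that correlate with the local gradient."""
    gradient = SENSORS[:, :N_OUT]                        # pretend the first inputs encode the gradient
    motors = controller(weights, SENSORS)
    return -np.mean(np.sum(motors * gradient, axis=1))   # DE minimizes, so negate

result = differential_evolution(fitness, bounds=[(-1, 1)] * N_W, maxiter=50, seed=0)
print("best toy fitness:", -result.fun)
```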

    Learning directed locomotion in modular robots with evolvable morphologies

    The vision behind this paper looks ahead to evolutionary robot systems where morphologies and controllers are evolved together and ‘newborn’ robots undergo a learning process to optimize their inherited brain for the inherited body. The specific problem we address is learning controllers for the task of directed locomotion in evolvable modular robots. To this end, we present a test suite of robots with different shapes and sizes and compare two learning algorithms, Bayesian optimization and HyperNEAT. The experiments in simulation show that both methods obtain good controllers, but Bayesian optimization is more effective and sample efficient. We validate the best learned controllers by constructing three robots from the test suite in the real world and observing their fitness and actual trajectories. The obtained results indicate a reality gap, but overall the trajectories are adequate and follow the target directions successfully.
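
    A minimal sketch of controller learning by Bayesian optimization, one of the two algorithms compared above, in a common form: a Gaussian-process surrogate over controller parameters with an expected-improvement acquisition. The objective, dimensionality, and parameter ranges are toy stand-ins; the paper scores directed locomotion in simulation and on real modular robots.

```python
# Hedged sketch: a bare-bones Bayesian optimization loop over controller parameters.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
DIM = 6  # hypothetical number of controller parameters

def gait_fitness(params):
    """Toy objective standing in for a locomotion trial (higher is better)."""
    return -np.sum((params - 0.3) ** 2)

def expected_improvement(gp, candidates, best_y):
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best_y) / sigma
    return (mu - best_y) * norm.cdf(z) + sigma * norm.pdf(z)

# A few random trials to seed the surrogate, then BO iterations.
X = rng.uniform(-1, 1, size=(5, DIM))
y = np.array([gait_fitness(x) for x in X])
for _ in range(30):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    candidates = rng.uniform(-1, 1, size=(256, DIM))
    next_x = candidates[np.argmax(expected_improvement(gp, candidates, y.max()))]
    X = np.vstack([X, next_x])
    y = np.append(y, gait_fitness(next_x))

print("best fitness found:", y.max())
```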

    Large-scale ICU data sharing for global collaboration: the first 1633 critically ill COVID-19 patients in the Dutch Data Warehouse


    Adaptive Control for Evolutionary Robotics: And its effect on learning directed locomotion

    This thesis is motivated by evolutionary robot systems where robot bodies and brains evolve simultaneously. In such a robot system, 'birth' must be followed by 'infant learning' with a learning method that works for the various morphologies evolution may produce. Here we address the task of directed locomotion in modular robots with controllers based on Central Pattern Generators. We present a bio-inspired adaptive feedback mechanism that uses a forward model and an inverse model that can be learned on-the-fly. We compare two versions (a simple and a sophisticated one) of this concept to a traditional (open-loop) controller, using Bayesian Optimization as the learning algorithm. The experimental results show that the sophisticated version outperforms the simple one and the traditional controller. It leads to improved performance and more robust controllers that cope better with noise.
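
    A rough sketch, under strong simplifying assumptions, of the closed-loop idea described above: a forward model learned on-the-fly predicts how a steering adjustment changes the heading, and its inverse chooses the adjustment that reduces the current directional error. The 1-D "plant", the linear models, the gains, and the noise levels are hypothetical stand-ins; the thesis uses CPG-based controllers, and gait generation itself is abstracted away here.

```python
# Hedged sketch: online forward/inverse model steering toward a target heading.
import numpy as np

rng = np.random.default_rng(0)
true_gain = 0.8        # unknown "plant": heading change per unit steering offset
target_heading = 0.5   # desired direction (radians)

heading = 0.0
w = 0.1                # forward-model parameter, learned on-the-fly
lr = 0.5               # learning rate for the forward-model update

for t in range(200):
    error = target_heading - heading
    # Inverse model: invert the learned forward model to pick a steering offset,
    # plus a little exploration noise so the forward model keeps getting data.
    offset = np.clip(error / w, -0.3, 0.3) + rng.normal(scale=0.1)
    # Plant response: how the (simulated) robot's heading actually changes.
    d_heading = true_gain * offset + rng.normal(scale=0.01)
    heading += d_heading
    # Forward model: predict the heading change, then learn from the prediction error.
    prediction = w * offset
    w += lr * (d_heading - prediction) * offset

print(f"heading {heading:.2f} (target {target_heading}), learned gain {w:.2f} (true {true_gain})")
```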

    The Effects of Adaptive Control on Learning Directed Locomotion

    This study is motivated by evolutionary robot systems where robot bodies and brains evolve simultaneously. In such systems, robot 'birth' must be followed by 'infant learning' with a learning method that works for the various morphologies evolution may produce. Here we address the task of directed locomotion in modular robots with controllers based on Central Pattern Generators. We present a bio-inspired adaptive feedback mechanism that uses a forward model and an inverse model that can be learned on-the-fly. We compare two versions (a simple and a sophisticated one) of this concept to a traditional (open-loop) controller, using Bayesian optimization as the learning algorithm. The experimental results show that the sophisticated version outperforms the simple one and the traditional controller. It leads to better performance and more robust controllers that cope better with noise.

    Comparing lifetime learning methods for morphologically evolving robots

    Evolving morphologies and controllers of robots simultaneously leads to a problem: even if the parents have well-matching bodies and brains, stochastic recombination can break this match and cause a body-brain mismatch in their offspring. We argue that this can be mitigated by having newborn robots perform a learning process that optimizes their inherited brain quickly after birth. We compare three different algorithms for doing this. To this end, we consider three algorithmic properties: efficiency, efficacy, and sensitivity to differences in the morphologies of the robots that run the learning process. Comment: Associated code: https://github.com/fudavd/revolve/tree/learnin
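
    A minimal sketch of the kind of comparison protocol implied above: run each candidate learner on several morphologies (here, toy objective functions standing in for robots) under the same evaluation budget, then report efficacy (best fitness reached), efficiency (evaluations needed to reach a quality threshold), and sensitivity (spread across morphologies). The two learners below, random search and a simple hill climber, are placeholders, not the algorithms compared in the paper.

```python
# Hedged sketch: benchmarking learners on efficiency, efficacy, and morphology sensitivity.
import numpy as np

rng = np.random.default_rng(0)
BUDGET, DIM = 200, 8

def make_morphology(seed):
    """Toy 'robot': a quadratic objective whose optimum location depends on the seed."""
    opt = np.random.default_rng(seed).uniform(-1, 1, DIM)
    return lambda x: -np.sum((x - opt) ** 2)

def random_search(f):
    history = [f(rng.uniform(-1, 1, DIM)) for _ in range(BUDGET)]
    return np.maximum.accumulate(history)

def hill_climber(f, step=0.1):
    x = rng.uniform(-1, 1, DIM)
    best, history = f(x), []
    for _ in range(BUDGET):
        cand = x + rng.normal(scale=step, size=DIM)
        val = f(cand)
        if val > best:
            x, best = cand, val
        history.append(best)
    return np.array(history)

for name, learner in [("random search", random_search), ("hill climber", hill_climber)]:
    curves = np.array([learner(make_morphology(s)) for s in range(5)])
    efficacy = curves[:, -1].mean()                        # mean best fitness at budget
    reached = curves >= -1.0                               # hypothetical quality threshold
    efficiency = np.mean([np.argmax(r) if r.any() else BUDGET for r in reached])
    sensitivity = curves[:, -1].std()                      # spread across morphologies
    print(f"{name:14s} efficacy={efficacy:.2f} efficiency={efficiency:.0f} evals "
          f"sensitivity={sensitivity:.2f}")
```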

    The Influence of Robot Traits and Evolutionary Dynamics on the Reality Gap

    The elephant in the room for evolutionary robotics is the reality gap. In the history of the field, several studies have investigated this phenomenon on fixed robot morphologies where only the controllers evolved. This paper addresses the reality gap in a wider context, in a system where both morphologies and controllers evolve. In this context, the morphology of the robots becomes a variable with a currently unknown influence. To examine this influence, we construct a test suite of robots with various morphologies and evolve their controllers for an effective gait. Comparing the simulated and the real-world performance of evolved controllers sampled at different generations during the evolutionary process, we gain new insights into the factors that influence the reality gap.
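
    A minimal sketch of the kind of comparison described above: for controllers sampled at different generations, pair their simulated fitness with their measured real-world fitness and quantify the gap. The numbers below are synthetic illustrations; the paper's actual metric and test-suite results are in the full text.

```python
# Hedged sketch: quantifying a reality gap from paired simulated/real fitness values.
import numpy as np

generations = np.array([0, 10, 20, 50, 100])
sim_fitness = np.array([0.8, 1.6, 2.4, 3.1, 3.5])    # hypothetical simulated gait speeds
real_fitness = np.array([0.7, 1.3, 1.7, 2.0, 2.1])   # hypothetical hardware measurements

relative_gap = (sim_fitness - real_fitness) / sim_fitness
for g, gap in zip(generations, relative_gap):
    print(f"generation {g:3d}: reality gap = {gap:.0%}")

# A gap that grows with generations would suggest evolution exploits simulator quirks.
```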
